
    Bases cérébrales du conflit visuo-auditif spatial et sémantique : études en IRM fonctionnelle et EEG

    The human cognitive system receives environmental information through multiple sensory channels. Most of the time the channels provide congruent content, whose integration helps build a unified perception of the world; sometimes, however, the environment provides inconsistent stimuli that perturb efficient interpretation. These situations generate a conflict associated with a behavioral cost, and sometimes severe consequences. In this research, we were interested in the visuo-auditory conflict and one of its consequences: the insensitivity to auditory alarms sometimes observed in pilots. The questions that motivated this research were the following: What brain structures are involved in managing this conflict? When do they act during sensorimotor and cognitive processing? What physiological processes may explain the insensitivity to auditory alarms sometimes observed in pilots during landing, a crucial phase of flight? We showed that the parieto-frontal network involved in unimodal conflict management is also involved in visuo-auditory conflict. We also showed that both attentional and pre-attentional mechanisms can modify our perception; these modulations occur through multimodal interactions at different levels of sensorimotor processing. We further showed that the insensitivity to alarms in pilots is related both to a pre-attentional modulation of the auditory signal and to the difficulty of decision making in a complex environment. Finally, we propose a simplified model of the functioning of the brain network involved in visuo-auditory conflict management.

    Évaluation et modulation des fonctions exécutives en neuroergonomie - Continuums cognitifs et expérimentaux

    Studies in neuroergonomics have shown that aircraft pilots can make errors because of a transient inability to exercise mental flexibility. Certain factors, such as high mental workload, strong time pressure, excessive stress, the occurrence of conflicts, or a loss of situation awareness, can temporarily impair the efficiency of the executive functions that support this flexibility. Since my initial work, which examined the conditions that lead to auditory neglect, I have sought to develop a scientific approach to quantifying and limiting the deleterious effects of these factors. This was done by studying executive functions in humans along the cognitive continuum (from the injured brain to the fully functioning brain) and the experimental continuum (from the computer to the real world). The fundamental approach of studying executive functions in neuroscience, combined with a gradual neuroergonomic approach involving pilots and brain-injured patients, has provided a better understanding of how these functions are engaged and impaired. This knowledge subsequently contributed to solutions for preserving their efficiency in complex situations. After recalling my academic background, I present in this manuscript a selection of work organized around three research themes. The first concerns the executive functions involved in attention, and in particular how perceptual load and mental workload can impair these functions. The second corresponds to a more applied aspect of this work: the assessment of the pilot's state, analyzed either through the flying activity itself or through the management and supervision of a particular system. The third and final theme concerns the search for predictive markers of cognitive performance and the design of cognitive training programs to limit dysexecutive disorders, whether of contextual or lesional origin. This work has contributed to a better understanding of transient and chronic cognitive disorders, but it has also raised questions that I now wish to address. To illustrate this line of thought, the last part of this document presents my research project, which aims to develop a multifactorial, ethical, open-science approach to cognitive efficiency.

    Real-Time State Estimation in a Flight Simulator Using fNIRS

    Working memory is a key executive function for flying an aircraft. This function is particularly critical when pilots have to recall series of air traffic control instructions. However, working memory limitations may jeopardize flight safety. Since functional near-infrared spectroscopy (fNIRS) seems promising for assessing working memory load, our objective was to implement an online fNIRS-based inference system that integrates two complementary estimators. The first estimator is a real-time MACD-based state estimation algorithm dedicated to identifying the pilot's instantaneous mental state (not-on-task vs. on-task); it requires no calibration process. The second estimator is an online SVM-based classifier able to discriminate task difficulty (low vs. high working memory load). These two estimators were tested with 19 pilots who were placed in a realistic flight simulator and asked to recall air traffic control instructions. We found that the estimated mental state matched the pilot's real state significantly better than chance (62% global accuracy, 58% specificity, and 72% sensitivity). The second estimator, dedicated to assessing single-trial working memory load, reached 80% classification accuracy, 72% specificity, and 89% sensitivity. These two estimators establish reusable blocks for further fNIRS-based passive brain-computer interface development.
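
As a rough illustration of the MACD principle behind the first estimator, the sketch below applies a moving-average convergence/divergence rule to a synthetic oxygenation series. The window sizes, threshold, and signal shape are illustrative assumptions, not the study's actual parameters.

```python
# Hedged sketch of an MACD-style on/off-task estimator on a synthetic
# HbO2 series. Spans and threshold are illustrative assumptions.

def ema(series, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2.0 / (span + 1)
    out, prev = [], series[0]
    for x in series:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

def macd_state(hbo2, fast=5, slow=20, threshold=0.0):
    """Label each sample 'on-task' when the fast EMA exceeds the slow one."""
    fast_ema, slow_ema = ema(hbo2, fast), ema(hbo2, slow)
    macd = [f - s for f, s in zip(fast_ema, slow_ema)]
    return ["on-task" if m > threshold else "not-on-task" for m in macd]

# A flat baseline followed by a rising oxygenation trend: the trend end
# should be flagged on-task once the fast EMA overtakes the slow one.
signal = [0.0] * 30 + [i * 0.05 for i in range(30)]
states = macd_state(signal)
print(states[0], states[-1])  # not-on-task on-task
```

No calibration step is needed, which mirrors the property the abstract highlights: the rule only compares two running averages of the same signal.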

    "Automation Surprise" in Aviation

    Conflicts between the pilot and the automation, when pilots detect but do not understand them, cause “automation surprise” situations and jeopardize flight safety. We conducted an experiment in a 3-axis motion flight simulator with 16 pilots equipped with an eye-tracker to analyze their behavior and eye movements during the occurrence of such a situation. The results revealed that this conflict engages participants' attentional abilities, resulting in excessive and inefficient visual search patterns. This experiment confirmed the crucial need to design solutions for detecting the occurrence of conflictual situations and to assist the pilots. We therefore proposed an approach to formally identify the occurrence of “automation surprise” conflicts based on the analysis of “silent mode changes” of the autopilot. A real-time demonstrator was implemented that automatically triggers messages in the cockpit explaining the autopilot behavior; it was tested as a proof of concept with 7 subjects facing 3 different conflicts with automation. The results showed the efficacy of this approach, which could be implemented in existing cockpits.
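
The "silent mode change" criterion could be approximated as follows. The event-log format, the 5-second window, and the mode labels are hypothetical, chosen only to illustrate the idea of a mode transition that was not preceded by any pilot input.

```python
# Hedged sketch: flag autopilot mode transitions with no pilot action in
# the preceding few seconds ("silent" changes). Event format and the
# 5-second window are illustrative assumptions, not the demonstrator's.

def silent_mode_changes(events, window=5.0):
    """events: list of (timestamp_s, source, description) tuples, where
    source is 'pilot' or 'autopilot'. Returns autopilot mode changes
    with no pilot action within `window` seconds before them, i.e.
    candidates for an explanatory cockpit message."""
    pilot_times = [t for t, src, _ in events if src == "pilot"]
    silent = []
    for t, src, desc in events:
        if src == "autopilot":
            recent = [p for p in pilot_times if t - window <= p <= t]
            if not recent:
                silent.append((t, desc))
    return silent

log = [
    (10.0, "pilot", "selects ALT HOLD"),
    (10.5, "autopilot", "mode change: ALT HOLD"),     # commanded, not silent
    (42.0, "autopilot", "mode change: ALT* -> V/S"),  # automatic reversion
]
print(silent_mode_changes(log))  # [(42.0, 'mode change: ALT* -> V/S')]
```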

    Effects of the audiovisual conflict on auditory early processes

    Auditory alarm misperception is one of the critical events that lead aircraft pilots to erroneous flying decisions. The rarity of these alarms, associated with their possible unreliability, may play a role in this misperception. In order to investigate this hypothesis, we manipulated both audiovisual conflict and sound rarity in a simplified landing task. Behavioral data and event-related potentials (ERPs) of thirteen healthy participants were analyzed. We found that the presentation of a rare auditory signal (i.e. an alarm), incongruent with visual information, led to a smaller amplitude of the auditory N100 (i.e. less negative) compared to the condition in which both signals were congruent. Moreover, the incongruity between the visual information and the rare sound did not significantly affect reaction times, suggesting that the rare sound was neglected. We propose that the lower N100 amplitude reflects an early visual-to-auditory gating that depends on the rarity of the sound. In complex aircraft environments, this early effect might be partly responsible for auditory alarm insensitivity. Our results provide a new basis for future aeronautic studies and the development of countermeasures.

    Physiological Assessment of Engagement during HRI: Impact of Manual vs Automatic Mode

    The use of physiological measurements to perform online assessment of operators' mental states is crucial in the field of human-robot interaction (HRI) research and, to the best of our knowledge, remains an open topic. In order to progress towards systems that dynamically adapt to operators' mental states, a first step is to determine an adequate protocol that elicits variations in engagement. To this purpose, this work focuses on analyzing the operator's physiological data streams recorded during a human-robot mission executed in an original virtual environment [Drougard17]. The mission consists in the mutual cooperation of a firefighter robot and its human operator to extinguish fires. A high level of complexity is obtained through the number and the random nature of the events to be handled during the mission. For example, guiding the robot, managing its water tank level, and taking care of its electric charge are tasks to be accomplished simultaneously, each of which can randomly be assigned to autonomous or manual mode. In addition, an extra task, keeping an adequate level in an external water tank so that the robot can refill, is assigned only to the human operator in order to continuously solicit his or her attention. Anyone can experience this mission by visiting the website robot-isae.isae.fr, set up to collect a large amount of behavioral data by crowdsourcing. The mission is accomplished through a remote human-machine interface made of controllers and a screen displaying different areas corresponding to each task. Figure 1 shows the graphical user interface, with the 5 areas of interest (AOIs). The control station is equipped with sensing devices for collecting human data: an eye-tracker (SMI), located on the bottom bar of the display, and a portable Bluetooth electrocardiograph (eMotion Faros 360).
A specific experimental procedure was defined to guarantee the statistical significance of the recorded dataset. Each operator had to complete at least three ten-minute missions, aiming for the best score in terms of fires extinguished. A complete rest period was imposed between missions to obtain a baseline for cardiac activity. Data were collected from 17 participants of mixed sex (9 females) with an average age of 28.5 (S.D. = 4.52). The number and duration of fixations per area of interest were extracted from the eye-tracker. The length of inter-beat intervals, Heart Rate Variability (HRV), and instant Heart Rate Variability (IHRV) were computed from the ECG. Preliminary results show lower HRV and IHRV during the mission than during the rest session: Student and Wilcoxon statistical tests confirm a difference at least equal to 6 (p<0.05). According to the literature, this evidence indicates that the mission succeeded in engaging the participants. The impact of each mode of operation (manual/autonomous) on the human markers was also observed and analyzed. Contrary to expectations, the operator was more engaged (lower HRV) during autonomous than during manual mode (p<0.05). This is in accordance with the tasks' difficulty: when the autonomous mode takes over, the operator's priority becomes the only task he or she must accomplish alone (filling the external tank), which is also designed to be the hardest one. This is confirmed by IHRV, which on average is greater during manual mode. Moreover, since HRV and IHRV behave in the same way, a real-time human-robot team supervision application can be foreseen. The effect of the current mode of operation is also observable in the number and duration of fixations on the two main AOIs: the video streamed from the robot and the external water tank level. The first attracts the operator's attention mainly in manual mode, the second in autonomous mode (p<0.05).
Spearman correlations of data samples per second confirm the previous results. Markers on these AOIs are correlated with the mode of the robot (rho=0.22, p<0.05) as well as with IHRV (rho=0.03, p<0.05). Several kinds of correlations were identified in the recorded dataset, of which the most significant for describing the operator's engagement are the following: the number and duration of fixations on the AOIs corresponding to the two main tasks are negatively correlated (rho=-0.6, p<0.05), reflecting the fact that the operator tends to switch attention mainly between these two tasks; the correlation of the IHRV marker with the markers on the AOIs (tank: rho=0.1, p<0.05; video: rho=-0.06, p<0.05) shows that the main tasks are perceived as such by the operator, while its correlation with the remaining mission time (rho=0.05, p<0.05) expresses a higher engagement as the mission progresses. Moreover, IHRV is also negatively correlated with performance indexes such as the number of extinguished fires (rho=-0.07, p<0.05) and the external tank level (rho=-0.14, p<0.05), meaning that the operator shows a higher engagement level when successfully accomplishing the mission. The outcomes of the proposed research confirm that a human-robot interaction mission induces mental state variations that correspond to levels of engagement. The demonstrated link between the sampled correlations and the global statistical analyses, returning relevant information on the operator's behavior, validates the possibility of using these markers for online applications. The effect of the alternation of manual and autonomous modes during the mission has been quantified on the markers and paves the way for automatic task allocation by a decisional system based on physiological data classification. Finally, the results obtained through these experiments demonstrate the validity of the overall approach and of the designed virtual environment. Further statistical analyses and the use of additional physiological measurements such as electroencephalography (EEG) are foreseen in the near future.
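
For reference, classical time-domain HRV metrics can be computed from inter-beat (RR) intervals as sketched below. SDNN and RMSSD are the textbook definitions; the interval values are synthetic, and the study's exact HRV and IHRV computations are not specified here.

```python
# Hedged sketch: standard time-domain HRV metrics from RR intervals (ms).
import math
import statistics

def sdnn(rr_ms):
    """Standard deviation of RR intervals (SDNN, in ms)."""
    return statistics.stdev(rr_ms)

def rmssd(rr_ms):
    """Root mean square of successive RR differences (RMSSD, in ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Synthetic comparison: a steadier beat (lower HRV) during the mission
# relative to rest is the direction reported above.
rest    = [820, 870, 790, 900, 810, 880]
mission = [805, 815, 800, 810, 795, 808]
print(round(sdnn(rest), 1), round(sdnn(mission), 1))
print(round(rmssd(rest), 1), round(rmssd(mission), 1))
```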

    Inattentional deafness to auditory alarms: Inter-individual differences, electrophysiological signature and single trial classification

    Inattentional deafness can have deleterious consequences in complex real-life situations (e.g. healthcare, aviation), leading to missed critical auditory signals. Such failure of auditory attention is thought to rely on top-down biasing mechanisms at the central executive level. A complementary account of this phenomenon is the existence of a visual dominance over hearing that could be implemented via direct visual-to-auditory pathways. To investigate this phenomenon, thirteen aircraft pilots, equipped with a 32-channel EEG system, faced low and high workload scenarios along with an auditory oddball task in a motion flight simulator. Prior to the flying task, the pilots were screened to assess their working memory span and visual dominance susceptibility. The behavioral results disclosed that the volunteers missed 57.7% of the auditory alarms in the difficult condition. Among all the evaluated capabilities, only the visual dominance index was predictive of the miss rate in the difficult scenario. These findings provide behavioral evidence that early cross-modal competitive processes other than top-down modulation could account for inattentional deafness. The electrophysiological analyses showed that missed alarms, compared with hits, led to a significant amplitude reduction of early perceptual (N100) and late attentional (P3a and P3b) event-related potential components. Eventually, we implemented an EEG-based processing pipeline to perform single-trial classification of inattentional deafness. The results indicate that this processing chain could be used in an ecological setting, as it reached a mean accuracy of 72.2% in discriminating missed from hit auditory alarms.
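
The abstract does not detail the classification pipeline itself. As a hedged stand-in, the sketch below discriminates synthetic miss/hit trials with a nearest-class-mean rule on a mean-amplitude feature taken from an assumed N100 latency window; the window indices and trial shapes are purely illustrative.

```python
# Hedged sketch: single-trial hit/miss discrimination from an ERP feature.
# Nearest-class-mean on a mean N100-window amplitude is an illustrative
# stand-in, not the study's actual classifier.
import statistics

def n100_feature(trial, window=slice(8, 13)):
    """Mean amplitude in an assumed N100 latency window (sample indices)."""
    return statistics.fmean(trial[window])

def fit(trials, labels):
    """Per-class mean of the feature ('hit' vs 'miss')."""
    return {c: statistics.fmean(n100_feature(t)
                                for t, l in zip(trials, labels) if l == c)
            for c in set(labels)}

def predict(model, trial):
    """Assign the class whose mean feature is closest to the trial's."""
    f = n100_feature(trial)
    return min(model, key=lambda c: abs(model[c] - f))

# Synthetic trials: hits carry a larger (more negative) N100 deflection,
# matching the amplitude reduction reported for missed alarms.
hit  = [0.0] * 8 + [-5.0] * 5 + [0.0] * 7
miss = [0.0] * 8 + [-1.0] * 5 + [0.0] * 7
model = fit([hit, miss], ["hit", "miss"])
print(predict(model, [0.0] * 8 + [-4.5] * 5 + [0.0] * 7))  # hit
```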

    Modeling approach to multi-agent system of human and machine agents: Application in design of early experiments for novel aeronautics systems

    The design of future flight-deck automation systems reflects a trend of changing the paradigm of human-computer interaction from a master (human)-slave (machine) mode to more equilibrated cooperation. In many cases such cooperation involves several humans and computer systems, for which multi-agent dynamic cooperative systems are appropriate models. The development of such systems requires a very profound analysis of the mutual interactions and conflicts that may arise in them, and testing them is exhaustive and expensive. In the scope of the D3CoS project, these problems are addressed from the modelling point of view, with the ambition to create tools that simplify the development phase and replace parts of the testing phase. In this paper we investigate common flight procedures for which computer assistance could be developed. We show how formal modelling of procedures allows us to inspect procedural inconsistencies and workload peaks before development starts. We show how a computer cognitive architecture (a virtual pilot) can simulate human pilot behaviour in the cockpit to address questions typical of the early phase of development. Analysis of these questions allows us to reduce the number of candidates for the final implementation without the need for expensive experiments with human pilots. This modelling approach is demonstrated on experiments undertaken both with human pilots and with a virtual pilot. The quality of the outcome from both experimental settings is preserved, as shown by physiological assessment of pilot workload, which in turn justifies the use of the modelling approach for this type of problem.

    The Spatial Release of Cognitive Load in Cocktail Party Is Determined by the Relative Levels of the Talkers

    In a multi-talker situation, spatial separation between talkers reduces cognitive processing load: this is the “spatial release of cognitive load”. The present study investigated the role played by the relative levels of the talkers in this spatial release of cognitive load. During the experiment, participants had to report the speech emitted by a target talker in the presence of a concurrent masker talker. The spatial separation (0° and 120° angular distance in azimuth) and the relative levels of the talkers (adverse, intermediate, and favorable target-to-masker ratio) were manipulated. The cognitive load was assessed with prefrontal functional near-infrared spectroscopy. Data from 14 young normal-hearing listeners revealed that the target-to-masker ratio had a direct impact on the spatial release of cognitive load. Spatial separation significantly reduced prefrontal activity only for the intermediate target-to-masker ratio and had no effect on prefrontal activity for the favorable and adverse target-to-masker ratios. Therefore, the relative levels of the talkers may be a key factor in determining the spatial release of cognitive load and, more specifically, the prefrontal activity induced by spatial cues in multi-talker environments.
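
The target-to-masker ratio manipulated above is conventionally defined in dB from the RMS levels of the two talkers. The sketch below shows that standard computation on toy signals; the study's actual adverse, intermediate, and favorable values are not reproduced here.

```python
# Hedged sketch: target-to-masker ratio (TMR) in dB from RMS levels.
# This is the standard dB-ratio definition applied to toy signals.
import math

def rms(samples):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def tmr_db(target, masker):
    """Positive dB: target louder than the masker; negative: adverse."""
    return 20.0 * math.log10(rms(target) / rms(masker))

target = [0.5, -0.5, 0.5, -0.5]  # constant-amplitude toy signals
masker = [1.0, -1.0, 1.0, -1.0]
print(round(tmr_db(target, masker), 1))  # -6.0 (an adverse ratio)
```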

    Global difficulty modulates the prioritization strategy in multitasking situations

    There has been a considerable amount of research on how cognition handles multitasking situations. Despite these efforts, it is still not clear how task parameters shape the allocation of attentional resources. Much research has suggested that difficulty levels could explain the conflicting observations in the literature, while very few studies have considered other factors such as task importance. In the present study, twenty participants had to carry out two N-Back tasks simultaneously, each subtask having distinct difficulty (0-, 1- or 2-Back) and importance (1 or 3 points) levels. Participants' cumulative dwell times were collected to assess their attentional strategies. Results showed that, depending on the global level of difficulty (the combination of the two difficulty levels), participants' attentional resources were driven either by subtask difficulty (under low global difficulty) or by subtask importance (under high global difficulty), in a non-compensatory way. We discuss these results in terms of decision-making heuristics and metacognition.
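
The sole rule of the N-Back task used above can be captured in a few lines: a stimulus is a target when it matches the one presented N steps back. The letter stream below is illustrative, not from the study.

```python
# Hedged sketch: the N-Back matching rule on an illustrative stream.

def nback_targets(stream, n):
    """Return the indices i where stream[i] == stream[i - n],
    i.e. the positions a participant should respond to."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

# In a 2-Back stream A B A B C A A C, positions 2 and 3 repeat the
# stimulus shown two steps earlier.
print(nback_targets(list("ABABCAAC"), 2))  # [2, 3]
```

A 0-Back variant reduces to matching a fixed reference stimulus, which is why it serves as the low-difficulty level.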